36 research outputs found

    LoCoH: nonparametric kernel methods for constructing home ranges and utilization distributions.

    Get PDF
    Parametric kernel methods currently dominate the literature on constructing animal home ranges (HRs) and utilization distributions (UDs). These methods frequently fail to capture the hard boundaries common to many natural systems. Recently, a local convex hull (LoCoH) nonparametric kernel method, which generalizes the minimum convex polygon (MCP) method, was shown to be more appropriate than parametric kernel methods for constructing HRs and UDs because of its ability to identify hard boundaries (e.g., rivers, cliff edges) and its convergence to the true distribution as sample size increases. Here we extend LoCoH in two ways: a "fixed sphere-of-influence," or r-LoCoH, method (kernels constructed from all points within a fixed radius r of each reference point) and an "adaptive sphere-of-influence," or a-LoCoH, method (kernels constructed from all points within a radius a such that the distances of all points within that radius to the reference point sum to a value less than or equal to a), and compare them to the original "fixed number of points," or k-LoCoH, method (all kernels constructed from the k-1 nearest neighbors of root points). We also compare these nonparametric LoCoH methods to parametric kernel methods using manufactured data and data collected from GPS collars on African buffalo in the Kruger National Park, South Africa. Our results demonstrate that LoCoH methods are superior to parametric kernel methods in estimating areas used by animals, in excluding unused areas (holes), and, generally, in constructing UDs and HRs arising from the movement of animals influenced by hard boundaries and irregular structures (e.g., rocky outcrops). We also demonstrate that a-LoCoH is generally superior to k- and r-LoCoH (software for all three methods is available at http://locoh.cnr.berkeley.edu).
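    The three neighbor-selection rules described above can be sketched as follows. This is an illustrative reimplementation (the function and parameter names are our own), not the software distributed at locoh.cnr.berkeley.edu:

```python
import numpy as np

def locoh_neighbors(points, ref_idx, method="k", k=5, r=1.0, a=5.0):
    """Return indices of the points used to build the local hull around
    one reference point, following the three LoCoH rules.
    `points` is an (n, 2) array; the defaults are arbitrary."""
    d = np.linalg.norm(points - points[ref_idx], axis=1)
    order = np.argsort(d)      # reference point itself comes first (d = 0)
    if method == "k":          # k-LoCoH: root point plus its k-1 nearest neighbors
        return order[:k]
    if method == "r":          # r-LoCoH: all points within a fixed radius r
        return order[d[order] <= r]
    if method == "a":          # a-LoCoH: take points, nearest first, while
        return order[np.cumsum(d[order]) <= a]  # summed distances stay <= a
    raise ValueError(f"unknown method: {method}")
```

    The selected points would then feed a convex-hull construction around each reference point; the hulls are merged, smallest first, to build the UD.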

    Contingent Kernel Density Estimation

    Get PDF
    Kernel density estimation is a widely used method for estimating a distribution based on a sample of points drawn from that distribution. In practice, some form of error generally contaminates the sample of observed points. Such error can be the result of imprecise measurements or observation bias. Often this error is negligible and may be disregarded in analysis. In cases where the error is non-negligible, estimation methods should be adjusted to reduce the resulting bias. Several modifications of kernel density estimation have been developed to address specific forms of error. One form of error that has not yet been addressed is the case where observations are nominally placed at the centers of areas from which the points are assumed to have been drawn, where these areas are of varying sizes. In this scenario, bias arises because the size of the error can vary among points: some subset of points may be known to have smaller error than another subset, or the form of the error may change among points. This paper proposes a "contingent kernel density estimation" technique to address this form of error. This new technique adjusts the standard kernel on a point-by-point basis in an adaptive response to the changing structure and magnitude of error. In this paper, equations for the contingent kernel technique are derived, the technique is validated using numerical simulations, and an example using the geographic locations of social-networking users is worked through to demonstrate the utility of the method.
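    As a rough 1-D illustration of the per-point adjustment (a simplification, not the paper's derived contingent kernels): each observation sits at the center of a cell of known width, so its kernel's bandwidth can be inflated by the variance of a uniform distribution over that cell. All names and the inflation formula below are assumptions of this sketch:

```python
import numpy as np

def contingent_kde(x, obs, cell_widths, base_bw=0.5):
    """Toy per-point kernel adjustment: observation i, snapped to the
    centre of a cell of width w_i, gets an effective bandwidth
    sqrt(base_bw**2 + w_i**2 / 12), i.e. the Gaussian kernel convolved
    with a uniform error of that width (variance w**2 / 12)."""
    obs = np.asarray(obs, dtype=float)
    widths = np.asarray(cell_widths, dtype=float)
    bw = np.sqrt(base_bw ** 2 + widths ** 2 / 12.0)   # one bandwidth per point
    z = (np.asarray(x, dtype=float)[..., None] - obs) / bw
    dens = np.exp(-0.5 * z ** 2) / (np.sqrt(2.0 * np.pi) * bw)
    return dens.mean(axis=-1)                          # average per-point kernels
```

    A point reported at the center of a large cell thus contributes a flatter, wider kernel than a precisely located point, rather than all points sharing one bandwidth.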

    Methods for Comparative Model Selection and Parameter Estimation in Diverse Modeling Applications

    No full text
    Predictive accuracy of a model is of key importance both in research and to a lay audience. Diverse modeling methods and parameter estimation methods exist, so a wide range of techniques is available from which to select when approaching a modeling task. Given this, two questions naturally arise: which model to select, and how to estimate its parameters. This dissertation is intended to advance the theory and practice of model selection and parameter estimation for the topics discussed here.

    * In Chapter 2, I develop A3, a novel method for assessing predictive accuracy and enabling direct comparisons between competing models in an accessible framework. The method uses resampling techniques to "wrap" predictive modeling methods and estimate a standard set of error metrics, both for the model as a whole and for each explanatory variable utilized by the model. Two case studies in the chapter illustrate the applied utility of the method and show how improved models may not only increase predictive accuracy but also alter inferences and conclusions about the effects of parameters in the model. An R package implementing the method is available on CRAN.
    * In Chapter 3, I develop ICE, a novel method of home range estimation. Effectively an estimator of estimators, ICE pits existing home range estimators against each other, each of which may be best suited to a given type of data. By selecting among different approaches, ICE can in principle improve on the performance of any individual estimator across heterogeneous data sets.
    * In Chapter 4, I develop Contingent Kernel Density Estimation, an extension of Kernel Density Estimation designed to account for the case when observations are measured with a specific form of error. The chapter develops the method and derives contingent kernels for commonly used kernels and sampling regimes. An application of the method to data collected from the social networking site Twitter estimates the national distribution of a sample of Twitter users.
    * The study in Chapter 5 analyzes a large data set collected from Twitter. It is based on data from over four million Twitter users and estimates parameters of this population, with a primary focus on the color preference choices made by these users. This "big data" approach yields novel results that might not have been identifiable with earlier, traditional approaches of sampling and surveying the behavior of individuals.

    Consistent and Clear Reporting of Results from Diverse Modeling Techniques: The A3 Method

    Get PDF
    The measurement and reporting of model error is of basic importance when constructing models. Here, a general method and an R package, A3, are presented to support the assessment and communication of the quality of a model fit, along with metrics of variable importance. The presented method is accurate, robust, and adaptable to a wide range of predictive modeling algorithms. The method is described along with case studies and a usage guide. It is shown how the method can be used to obtain more accurate models for prediction and how this may simultaneously lead to altered inferences and conclusions about the impact of potential drivers within a system.
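    A minimal sketch of the resampling idea (not the A3 package's actual API): cross-validate the full model to get an out-of-sample accuracy estimate, then permute one explanatory column at a time so the drop in accuracy gauges that variable's importance. All names below are illustrative:

```python
import random

def cv_r2(X, y, fit_predict, folds, order):
    """Out-of-sample R^2 from simple k-fold cross-validation."""
    ybar = sum(y) / len(y)
    sse = sst = 0.0
    for f in range(folds):
        test = sorted(order[f::folds])
        test_set = set(test)
        train = [i for i in order if i not in test_set]
        preds = fit_predict([X[i] for i in train], [y[i] for i in train],
                            [X[i] for i in test])
        for i, p in zip(test, preds):
            sse += (y[i] - p) ** 2
            sst += (y[i] - ybar) ** 2
    return 1.0 - sse / sst

def a3_style_summary(X, y, fit_predict, folds=5, seed=0):
    """Score the full model, then permute each column in turn: the drop
    in R^2 is that variable's importance."""
    rng = random.Random(seed)
    order = list(range(len(y)))
    rng.shuffle(order)
    full = cv_r2(X, y, fit_predict, folds, order)
    importance = {}
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        rng.shuffle(col)                 # break column j's link to y
        Xp = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
        importance[j] = full - cv_r2(Xp, y, fit_predict, folds, order)
    return full, importance
```

    Because the wrapper only needs a fit-and-predict callable, the same reporting works unchanged across very different modeling algorithms, which is the property the abstract emphasizes.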

    Dynamic risk assessment method – a proposal for assessing risk in water supply system / Metoda dynamicznego szacowania ryzyka – propozycja oceny ryzyka w systemie zaopatrzenia wody

    No full text
    System Dynamics is a methodology for modeling and analyzing complex systems. Such a complex system can be characterized by simple interconnections between its elements and by the presence of feedback loops. Performing risk assessment within System Dynamics modeling is a difficult challenge. Although in some cases the series of results obtained by simulation may appear random, there is often a high degree of autocorrelation between these series, arising from the feedback links in the system. The article proposes a Dynamic Risk Assessment Method (MDSR) that makes it possible to assess the risk associated with the hypothetical costs of illness caused by contamination of a water supply system with Cryptosporidium.
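    The autocorrelation problem described above can be illustrated with a minimal feedback simulation; all names, parameters, and values below are illustrative, not taken from the paper:

```python
import random

def simulate_cost_series(steps=200, phi=0.8, seed=0):
    """Minimal sketch of why System Dynamics outputs resist naive risk
    analysis: a feedback coefficient (phi) makes successive simulated
    cost deviations autocorrelated, so they cannot be treated as
    independent random draws."""
    rng = random.Random(seed)
    x, series = 0.0, []
    for _ in range(steps):
        x = phi * x + rng.gauss(0.0, 1.0)   # AR(1)-style feedback loop
        series.append(x)
    return series

def lag1_autocorr(s):
    """Sample lag-1 autocorrelation of a series."""
    m = sum(s) / len(s)
    num = sum((a - m) * (b - m) for a, b in zip(s, s[1:]))
    den = sum((a - m) ** 2 for a in s)
    return num / den
```

    With phi near 1 the simulated series looks noisy yet is strongly autocorrelated, so risk estimates based on treating the outputs as independent samples would be misleading.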

    Contingent kernels (C) for combinations of univariate standard kernels (K) and two forms of contingency distributions (Ψ).

    No full text